Crate tokio_rustls


Asynchronous TLS/SSL streams for Tokio using Rustls.

Why do I need to call poll_flush?

Most TLS implementations will have an internal buffer to improve throughput, and rustls is no exception.

When data is written to a TlsStream, it always goes into the rustls buffer first; the encrypted packets are then taken out of that buffer and written to the underlying data channel (such as a TcpStream). If the data channel is pending, some data may remain in the rustls buffer.

To keep things simple and correct, tokio-rustls makes TlsStream behave like BufWriter. For TlsStream<TcpStream>, this means that data written by poll_write is not guaranteed to have reached the TcpStream. You must call poll_flush to ensure it is written to the TcpStream.

You should call poll_flush at an appropriate time, such as when a burst of poll_write calls is complete and there is no more data to write, as in the sketch below.
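Below is a minimal sketch of this write-then-flush pattern, assuming a rustls 0.23-era builder API, the webpki-roots crate for the root store, and example.com as a placeholder server; AsyncWriteExt::flush is what drives poll_flush here.

```rust
use std::sync::Arc;

use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
use tokio_rustls::rustls::pki_types::ServerName;
use tokio_rustls::rustls::{ClientConfig, RootCertStore};
use tokio_rustls::TlsConnector;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Build a client config trusting the Mozilla root set (webpki-roots).
    let mut roots = RootCertStore::empty();
    roots.extend(webpki_roots::TLS_SERVER_ROOTS.iter().cloned());
    let config = ClientConfig::builder()
        .with_root_certificates(roots)
        .with_no_client_auth();
    let connector = TlsConnector::from(Arc::new(config));

    // Placeholder host; substitute a real TLS server.
    let tcp = TcpStream::connect("example.com:443").await?;
    let domain = ServerName::try_from("example.com").expect("invalid DNS name");
    let mut tls = connector.connect(domain, tcp).await?;

    // write_all drives poll_write; the encrypted bytes may still sit in the
    // rustls buffer, so flush (which drives poll_flush) before waiting for a
    // response.
    tls.write_all(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        .await?;
    tls.flush().await?;

    let mut response = Vec::new();
    tls.read_to_end(&mut response).await?;
    println!("read {} bytes", response.len());
    Ok(())
}
```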

Why don’t we write during poll_read?

We did this in the early days of tokio-rustls, but it caused some bugs. Those bugs could be worked around, but the workarounds degrade performance (spurious wakeups in the reverse direction).

Writing during reads would also prevent us from implementing full duplex in the future.

see https://github.com/tokio-rs/tls/issues/40

Why can’t we handle it like native-tls?

When the data channel returns Pending, native-tls will falsely report the number of bytes it consumed. This means that poll_write does not return Ready until the data has actually been written to the data channel, which avoids the need to call poll_flush.

However, this does not conform to the convention of the AsyncWrite trait: if you pass inconsistent data across two poll_write calls, it may cause unexpected behavior. A sketch of the convention in question follows the issue link below.

see https://github.com/tokio-rs/tls/issues/41
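For illustration only, here is a hedged sketch of the caller-side convention at issue, driving poll_write directly with std::future::poll_fn; the write_some helper is hypothetical and not part of this crate.

```rust
use std::future::poll_fn;
use std::io;
use std::pin::Pin;

use tokio::io::AsyncWrite;

// Hypothetical helper: perform a single poll_write on any AsyncWrite stream.
async fn write_some<S: AsyncWrite + Unpin>(stream: &mut S, buf: &[u8]) -> io::Result<usize> {
    // Per the AsyncWrite contract, Ready(Ok(n)) commits the first n bytes of
    // `buf`, while Pending commits nothing, so the caller may legitimately
    // retry later with a different buffer. A stream that consumes bytes while
    // reporting Pending, or that reports bytes it did not take from `buf`,
    // breaks this convention.
    poll_fn(|cx| Pin::new(&mut *stream).poll_write(cx, buf)).await
}
```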
